Active inference is a state-of-the-art framework for modelling the brain that explains a wide range of mechanisms such as habit formation, dopaminergic discharge and curiosity. Recently, two versions of Branching Time Active Inference (BTAI) based on Monte-Carlo tree search were developed to handle the exponential (space and time) complexity class that occurs when computing the prior over all possible policies up to the time horizon. However, those two versions of BTAI still suffer from an exponential complexity class w.r.t. the number of observed and latent variables being modelled. In this paper, we resolve this limitation by first allowing the modelling of several observations, each with its own likelihood mapping. Similarly, we allow each latent state to have its own transition mapping. The inference algorithm then exploits the factorisation of the likelihood and transition mappings to accelerate the computation of the posterior. These two optimisations were tested on the dSprites environment, in which the metadata of the dSprites dataset was used as input to the model instead of the dSprites images. On this task, $BTAI_{VMP}$ (Champion et al., 2022b,a) was able to solve 96.9% of the task in 5.1 seconds, and $BTAI_{BF}$ (Champion et al., 2021a) was able to solve 98.6% of the task in 17.5 seconds. Our new approach ($BTAI_{3MF}$) outperformed both of its predecessors by solving the task completely (100%) in only 2.559 seconds. Finally, $BTAI_{3MF}$ has been implemented as a flexible and easy-to-use (Python) package, and we developed a graphical user interface to enable inspection of the model's beliefs, planning process and behaviour.
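The factorisation described above can be illustrated with a minimal sketch (not the authors' $BTAI_{3MF}$ implementation): each observation modality contributes its own likelihood matrix, so the posterior over a latent factor is obtained by multiplying in each modality's evidence separately rather than building one joint likelihood. All matrices and state names below are hypothetical.

```python
import numpy as np

def normalise(v):
    return v / v.sum()

def factored_posterior(prior, likelihoods, observations):
    """Posterior over one latent factor given several observation modalities.

    prior        : (S,) prior over the factor's states
    likelihoods  : list of (O_m, S) likelihood matrices, one per modality
    observations : list of observed indices, one per modality
    """
    post = prior.copy()
    for A, o in zip(likelihoods, observations):
        post = post * A[o]          # multiply in each modality's evidence
    return normalise(post)

prior = np.array([0.5, 0.5])
A_colour = np.array([[0.9, 0.2], [0.1, 0.8]])   # hypothetical modality 1
A_shape  = np.array([[0.7, 0.3], [0.3, 0.7]])   # hypothetical modality 2
post = factored_posterior(prior, [A_colour, A_shape], [0, 0])
```

Because each modality is handled by its own small matrix, the cost grows with the sum of the modality sizes rather than their product, which is the source of the speed-up the abstract reports.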
Branching Time Active Inference (Champion et al., 2021b,a) is a framework that proposes to look at planning as a form of Bayesian model expansion. Its roots can be found in active inference (Friston et al., 2016; Da Costa et al., 2020; Champion et al., 2021c), a neuroscientific framework widely used for brain modelling, as well as in Monte Carlo tree search (Browne et al., 2012), a method broadly applied in the reinforcement learning literature. Up to now, inference over the latent variables was performed by taking advantage of the flexibility offered by variational message passing (Winn and Bishop, 2005), an iterative process that can be understood as sending messages along the edges of a factor graph (Forney, 2001). In this paper, we harness the efficiency of an alternative method for inference called Bayesian filtering (Fox et al., 2003), which does not require iterating the update equations until convergence of the variational free energy. Instead, this scheme alternates between two phases: integration of evidence and prediction of future states. Both phases can be performed efficiently, and this provides a seventy-fold speed-up over the state of the art.
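The two alternating phases of discrete Bayesian filtering can be sketched as follows (an illustrative toy example, with hypothetical likelihood and transition matrices, not code from the paper):

```python
import numpy as np

def integrate_evidence(belief, A, obs):
    """Evidence phase: multiply the belief by the observation likelihood."""
    b = belief * A[obs]
    return b / b.sum()

def predict(belief, B):
    """Prediction phase: push the belief one step through the transitions."""
    return B @ belief

A = np.array([[0.8, 0.1], [0.2, 0.9]])   # P(o | s), hypothetical
B = np.array([[0.7, 0.3], [0.3, 0.7]])   # P(s' | s), hypothetical
belief = np.array([0.5, 0.5])

belief = integrate_evidence(belief, A, obs=0)  # integrate the new observation
belief = predict(belief, B)                    # predict the next state
```

Each phase is a single matrix-vector operation, which is why no iteration until free-energy convergence is needed.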
Active inference is a state-of-the-art framework for modelling the brain that explains a wide range of mechanisms such as habit formation, dopaminergic discharge and curiosity. However, recent implementations suffer from an exponential (space and time) complexity class when computing the prior over all possible policies up to the time horizon. Fountas et al. (2020) used Monte Carlo tree search to address this problem, leading to very good results in two different tasks. Additionally, Champion et al. (2021a) proposed a tree search approach based on structure learning. This was enabled by the development of a variational message passing approach to active inference (Champion et al., 2021b), which allows compositional construction of Bayesian networks for active inference. However, this message passing tree search approach, which we call Branching Time Active Inference (BTAI), had never been tested empirically. In this paper, we present an experimental study of the approach (Champion et al., 2021a) in the context of a maze-solving agent. In this context, we show that both improved prior preferences and deeper search help mitigate the vulnerability to local minima. Then, we compare BTAI to standard active inference (AI) on a graph navigation task. We show that for small graphs, both BTAI and AI successfully solve the task. For larger graphs, AI exhibits an exponential (space) complexity class, making the approach intractable. BTAI, however, explores the space of policies more efficiently, successfully scaling to larger graphs.
Over the last 10 to 15 years, active inference has helped to explain various brain mechanisms, from habit formation to dopaminergic discharge and even the modelling of curiosity. However, current implementations suffer from an exponential (space and time) complexity class when computing the prior over all possible policies up to the time horizon. Fountas et al. (2020) used Monte Carlo tree search to address this problem, leading to impressive results in two different tasks. In this paper, we propose an alternative framework that aims to unify tree search and active inference by casting planning as a structure learning problem. Two tree search algorithms are then presented. The first propagates the expected free energy forward in time (i.e., towards the leaves), while the second propagates it backward (i.e., towards the root). We then demonstrate that forward and backward propagation are related to active inference and sophisticated inference, respectively, thus clarifying the differences between these two planning strategies.
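The backward scheme, aggregating a cost from the leaves toward the root, can be sketched generically (the tree, the costs, and the min-aggregation rule below are illustrative assumptions, not the paper's expected-free-energy equations):

```python
# Minimal sketch of backward propagation of a per-node cost (standing in for
# the expected free energy) through a search tree: each node combines its own
# cost with the best aggregated cost among its children.

def backpropagate(tree, node, costs):
    """Return the cost of `node` plus the best child's aggregated cost."""
    children = tree.get(node, [])
    if not children:
        return costs[node]
    return costs[node] + min(backpropagate(tree, c, costs) for c in children)

tree = {"root": ["a", "b"], "a": ["a1", "a2"]}                  # hypothetical
costs = {"root": 1.0, "a": 0.5, "b": 2.0, "a1": 0.3, "a2": 0.9}
total = backpropagate(tree, "root", costs)
```

Forward propagation would instead accumulate the cost along each root-to-leaf path as the tree is expanded; the two directions score the same tree but rank partial policies differently.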
We consider the task of text generation in language models with constraints specified in natural language. To this end, we first create a challenging benchmark Cognac that provides as input to the model a topic with example text, along with a constraint on text to be avoided. Unlike prior work, our benchmark contains knowledge-intensive constraints sourced from databases like Wordnet and Wikidata, which allows for straightforward evaluation while striking a balance between broad attribute-level and narrow lexical-level controls. We find that even state-of-the-art language models like GPT-3 fail often on this task, and propose a solution to leverage a language model's own internal knowledge to guide generation. Our method, called CognacGen, first queries the language model to generate guidance terms for a specified topic or constraint, and uses the guidance to modify the model's token generation probabilities. We propose three forms of guidance (binary verifier, top-k tokens, textual example), and employ prefix-tuning approaches to distill the guidance to tackle diverse natural language constraints. Through extensive empirical evaluations, we demonstrate that CognacGen can successfully generalize to unseen instructions and outperform competitive baselines in generating constraint conforming text.
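The binary-verifier form of guidance can be illustrated with a small sketch (the general idea of guidance-modified decoding, not the CognacGen implementation; the vocabulary, probabilities, and banned terms are hypothetical):

```python
import numpy as np

def apply_binary_guidance(probs, vocab, violates):
    """Zero out tokens flagged by the verifier and renormalise."""
    mask = np.array([0.0 if violates(tok) else 1.0 for tok in vocab])
    guided = probs * mask
    return guided / guided.sum()

vocab = ["paris", "uranium", "curie", "radium"]   # toy vocabulary
probs = np.array([0.4, 0.3, 0.2, 0.1])            # model's next-token probs
banned = {"uranium"}                              # hypothetical constraint terms
guided = apply_binary_guidance(probs, vocab, lambda t: t in banned)
```

The top-k and textual-example forms of guidance differ only in how the set of affected tokens is produced; the modification of the token distribution follows the same pattern.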
Naturally-occurring information-seeking questions often contain questionable assumptions -- assumptions that are false or unverifiable. Questions containing questionable assumptions are challenging because they require a distinct answer strategy that deviates from typical answers to information-seeking questions. For instance, the question "When did Marie Curie discover Uranium?" cannot be answered as a typical when question without addressing the false assumption "Marie Curie discovered Uranium". In this work, we propose (QA)$^2$ (Question Answering with Questionable Assumptions), an open-domain evaluation dataset consisting of naturally-occurring search engine queries that may or may not contain questionable assumptions. To be successful on (QA)$^2$, systems must be able to detect questionable assumptions and also be able to produce adequate responses for both typical information-seeking questions and ones with questionable assumptions. We find that current models do struggle with handling questionable assumptions -- the best performing model achieves 59% human rater acceptability on abstractive QA with (QA)$^2$ questions, leaving substantial headroom for progress.
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
We test grip strength and shock absorption properties of various granular material in granular jamming robotic components. The granular material comprises a range of natural, manufactured, and 3D printed material encompassing a wide range of shapes, sizes, and Shore hardness. Two main experiments are considered, both representing compelling use cases for granular jamming in soft robotics. The first experiment measures grip strength (retention force measured in Newtons) when we fill a latex balloon with the chosen grain type and use it as a granular jamming gripper to pick up a range of test objects. The second experiment measures shock absorption properties recorded by an Inertial Measurement Unit which is suspended in an envelope of granular material and dropped from a set height. Our results highlight a range of shape, size and softness effects, including that grain deformability is a key determinant of grip strength, and interestingly, that larger grain sizes in 3D printed grains create better shock absorbing materials.
Granular jamming has recently become popular in soft robotics with widespread applications including industrial gripping, surgical robotics and haptics. Previous work has investigated the use of various techniques that exploit the nature of granular physics to improve jamming performance, however this is generally underrepresented in the literature compared to its potential impact. We present the first research that exploits vibration-based fluidisation actively (e.g., during a grip) to elicit bespoke performance from granular jamming grippers. We augment a conventional universal gripper with a computer-controlled audio exciter, which is attached to the gripper via a 3D printed mount, and build an automated test rig to allow large-scale data collection to explore the effects of active vibration. We show that vibration in soft jamming grippers can improve holding strength. In a series of studies, we show that frequency and amplitude of the waveforms are key determinants of performance, and that jamming performance is also dependent on temporal properties of the induced waveform. We hope to encourage further study focused on active vibrational control of jamming in soft robotics to improve performance and increase the diversity of potential applications.